Fairwashing refers to the risk that an unfair black-box model can be rationalized through the manipulation of its post-hoc explanations. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., the data points being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also show that fairwashing attacks can transfer across black-box models, meaning that other black-box models can be fairwashed without explicitly using their predictions. This generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, which is based on the computation of the range of unfairness of high-fidelity explainers.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
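Two of the reported practices, k-fold cross-validation on the training set and ensembling over the resulting fold models, can be sketched as follows. This is a minimal NumPy illustration with a toy dataset and a least-squares linear model standing in for a deep network; none of these names or numbers come from the survey:

```python
import numpy as np

# Toy stand-in for a challenge training set (real data would be images).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X[:, 0])  # labels in {-1, +1}, determined by the first feature

# K-fold cross-validation on the training set: one model per fold.
k = 5
indices = rng.permutation(len(X))
folds = np.array_split(indices, k)
weights = []
for i in range(k):
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    # Least-squares linear model as a placeholder for a deep network.
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    weights.append(w)

# Ensembling over the k fold models: average their scores, then threshold.
def ensemble_predict(X_new):
    scores = np.mean([X_new @ w for w in weights], axis=0)
    return np.sign(scores)

accuracy = (ensemble_predict(X) == y).mean()
```

Averaging the fold models' scores before thresholding corresponds to the "multiple identical models" flavor of ensembling the survey reports; heterogeneous ensembling would mix different architectures instead.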
Simulating quantum channels is a fundamental primitive in quantum computing, since quantum channels define general (trace-preserving) quantum operations. An arbitrary quantum channel cannot be exactly simulated using a finite-dimensional programmable quantum processor, making it important to develop optimal approximate simulation techniques. In this paper, we study the challenging setting in which the channel to be simulated varies adversarially with time. We propose the use of matrix exponentiated gradient descent (MEGD), an online convex optimization method, and analytically show that it achieves a sublinear regret in time. Through experiments, we validate the main results for time-varying dephasing channels using a programmable generalized teleportation processor.
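The MEGD update itself can be sketched as follows. This is a generic online-learning illustration, not the paper's teleportation-processor setup: the quadratic loss, the fixed target `T`, and the step size are placeholder assumptions; only the mirror-descent update with matrix exponentials, which keeps the iterate a valid density matrix (positive semidefinite, unit trace), is the named technique:

```python
import numpy as np

def expm_h(A):
    # Matrix exponential of a Hermitian matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.conj().T

def logm_h(A):
    # Matrix logarithm of a positive-definite Hermitian matrix.
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def megd_step(X, grad, eta):
    # Matrix exponentiated gradient descent step: mirror descent with the
    # von Neumann entropy as regularizer; the normalization keeps trace one.
    M = expm_h(logm_h(X) - eta * grad)
    return M / np.trace(M).real

# Online tracking of a (here fixed) target density matrix under the
# quadratic loss 0.5 * ||X - T||_F^2, whose gradient is X - T.
d = 2
X = np.eye(d) / d                       # maximally mixed initial state
T = np.array([[0.8, 0.0], [0.0, 0.2]])  # hypothetical target, trace one
for _ in range(200):
    X = megd_step(X, X - T, eta=0.5)

err = np.linalg.norm(X - T)
```

In the adversarial setting of the paper the gradient would change at every round; the sublinear-regret guarantee concerns the cumulative loss against the best fixed program in hindsight, not convergence to a single target as in this stationary toy.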
Timely and effective feedback within surgical training plays a critical role in developing the skills required to perform safe and efficient surgery. Feedback from expert surgeons, while especially valuable in this regard, is challenging to acquire due to their typically busy schedules, and may be subject to biases. Formal assessment procedures like OSATS and GEARS attempt to provide objective measures of skill, but remain time-consuming. With advances in machine learning there is an opportunity for fast and objective automated feedback on technical skills. The SimSurgSkill 2021 challenge (hosted as a sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in this endeavor. Using virtual reality (VR) surgical tasks, competitors were tasked with localizing instruments and predicting surgical skill. Here we summarize the winning approaches and how they performed. Using this publicly available dataset and results as a springboard, future work may enable more efficient training of surgeons with advances in surgical data science. The dataset can be accessed from https://console.cloud.google.com/storage/browser/isi-simsurgskill-2021.
In recent years, the performance of novel view synthesis using perspective images has dramatically improved with the advent of neural radiance fields (NeRF). This study proposes two novel techniques that effectively build NeRF for 360{\textdegree} omnidirectional images. Because a 360{\textdegree} image in equirectangular projection (ERP) format exhibits spatial distortion in its high-latitude regions and covers a 360{\textdegree}-wide viewing angle, NeRF's general ray sampling strategy is ineffective; the view synthesis accuracy of NeRF is therefore limited and learning is inefficient. We propose two non-uniform ray sampling schemes for NeRF to suit 360{\textdegree} images: distortion-aware ray sampling and content-aware ray sampling. We created an evaluation dataset, Synth360, using Replica and SceneCity models of indoor and outdoor scenes, respectively. In experiments, we show that our proposal successfully builds 360{\textdegree} image NeRF in terms of both accuracy and efficiency. The proposal is widely applicable to advanced variants of NeRF: DietNeRF, AugNeRF, and NeRF++ combined with the proposed techniques further improve the performance. Moreover, we show that our proposed method enhances the quality of real-world scenes in 360{\textdegree} images. Synth360: https://drive.google.com/drive/folders/1suL9B7DO2no21ggiIHkH3JF3OecasQLb.
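A plausible minimal form of distortion-aware ray sampling for ERP images is sketched below. This is a hedged illustration, not the paper's implementation: the cosine-latitude weighting is simply the standard solid-angle correction for equirectangular projection, which removes the oversampling of polar pixels that uniform pixel sampling would cause:

```python
import numpy as np

def distortion_aware_ray_indices(height, width, n_rays, rng):
    # In equirectangular projection (ERP), every pixel row spans the same
    # image area but a solid angle proportional to cos(latitude). Sampling
    # pixels uniformly therefore oversamples the poles; weighting each pixel
    # by cos(latitude) restores a near-uniform distribution over the sphere.
    lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
    row_weights = np.cos(lat)
    p = np.repeat(row_weights, width)
    p /= p.sum()
    flat = rng.choice(height * width, size=n_rays, replace=True, p=p)
    return flat // width, flat % width  # (row, col) of the sampled rays

rng = np.random.default_rng(0)
rows, cols = distortion_aware_ray_indices(64, 128, n_rays=20000, rng=rng)
```

Content-aware sampling would further reweight this distribution by image content; the paper combines both schemes.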
We present a lightweight post-processing method to refine the semantic segmentation results of point cloud sequences. Most existing methods segment frame by frame and thus encounter the inherent ambiguity of the problem: given only a measurement from a single frame, labels are sometimes difficult to predict even for humans. To remedy this problem, we propose to explicitly train a network to refine the results predicted by an existing segmentation method. The network, which we call P2Net, learns the consistency constraints between coincident points from consecutive frames after registration. We evaluate the proposed post-processing method both qualitatively and quantitatively on the SemanticKITTI dataset, which consists of real outdoor scenes. The effectiveness of the proposed method is validated by comparing the results predicted by two representative networks with and without refinement by the post-processing network. Specifically, qualitative visualization validates the key idea that labels of points that are difficult to predict can be corrected with P2Net. Quantitatively, overall mIoU is improved from 10.5% to 11.7% for PointNet [1] and from 10.8% to 15.9% for PointNet++ [2].
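The consistency constraint between coincident points of consecutive registered frames can be illustrated with a brute-force nearest-neighbor sketch. P2Net learns this correction; the unconditional label copy and the `radius` parameter below are illustrative simplifications, not the paper's method:

```python
import numpy as np

def refine_labels(points_t, labels_t, points_prev, labels_prev, radius=0.2):
    # After registration, a point and its nearest neighbor in the previous
    # frame observe the same surface, so conflicting labels can be
    # reconciled. Here we simply copy the neighbor's label whenever it lies
    # within `radius`; a learned network would weigh both predictions.
    refined = labels_t.copy()
    for i, p in enumerate(points_t):
        d = np.linalg.norm(points_prev - p, axis=1)
        j = np.argmin(d)
        if d[j] < radius:
            refined[i] = labels_prev[j]
    return refined

# Two registered frames of the same 3 points; frame t mislabels point 1.
prev_pts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
prev_lbl = np.array([0, 1, 2])
cur_pts = prev_pts + 0.01        # coincident up to registration noise
cur_lbl = np.array([0, 7, 2])    # label 7 is an inconsistent prediction
out = refine_labels(cur_pts, cur_lbl, prev_pts, prev_lbl)
```

The brute-force distance computation is quadratic in the number of points; real point cloud pipelines would use a spatial index such as a k-d tree.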
Although sketch-to-photo retrieval has a wide range of applications, it is costly to obtain paired, richly labeled ground truth. In contrast, photo retrieval data is easier to acquire. Therefore, previous works pre-train their models on richly labeled photo retrieval data (i.e., the source domain) and then fine-tune them on the limited-labeled sketch-to-photo retrieval data (i.e., the target domain). However, without co-training source and target data, source-domain knowledge might be forgotten during the fine-tuning process, while simply co-training them may cause negative transfer due to domain gaps. Moreover, the identity label spaces of the source and target data are generally disjoint, so conventional category-level Domain Adaptation (DA) is not directly applicable. To address these issues, we propose an Instance-level Heterogeneous Domain Adaptation (IHDA) framework. We apply the fine-tuning strategy for identity-label learning, aiming to transfer instance-level knowledge in an inductive transfer manner. Meanwhile, labeled attributes from the source data are selected to form a shared label space for the source and target domains. Guided by the shared attributes, DA is used to bridge the cross-dataset and heterogeneous domain gaps, which transfers instance-level knowledge in a transductive transfer manner. Experiments show that our method sets a new state of the art on three sketch-to-photo image retrieval benchmarks without extra annotations, which opens the door to training more effective models on limited-labeled heterogeneous image retrieval tasks. Related code is available at https://github.com/fandulu/IHDA.
Super-resolving the coarse outputs of global climate simulations, termed downscaling, is crucial for making political and social decisions on systems that require long-term climate change projections. However, existing fast super-resolution techniques have not preserved the spatial correlations of climate data, which is particularly important when we deal with spatially extended systems, such as the development of transport infrastructure. Herein, we show that adversarial-network-based machine learning enables us to correctly reconstruct the inter-regional spatial correlations in downscaling with a factor of up to fifty, while maintaining pixel-wise statistical consistency. Direct comparison with measured meteorological data of temperature and precipitation distributions reveals that integrating climatologically important physical information is essential for accurate downscaling, which prompts us to call our approach $\pi$SRGAN (physics-informed super-resolution generative adversarial network). The present method has a potential application to the inter-regionally consistent assessment of climate change impacts.
A 360{\textdegree} image is informative: it contains omnidirectional visual information around the camera. However, the area covered by a 360{\textdegree} image is much larger than the human visual field, so important information in different view directions is easily overlooked. To tackle this issue, we propose a method for predicting the optimal set of regions of interest (ROIs) from a single 360{\textdegree} image using visual saliency as a cue. To deal with the scarce, strongly biased training data of existing single 360{\textdegree} image saliency prediction datasets, we also propose a data augmentation method based on the spherical random rotation of data. From the predicted saliency map and redundant candidate regions, we obtain the optimal set of ROIs by considering both the saliency within a region and the Intersection over Union (IoU) between regions. We conduct subjective evaluations to show that the proposed method can select regions that properly summarize the input 360{\textdegree} image.
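One hedged way to sketch the selection of an ROI set that balances per-region saliency against inter-region IoU is a greedy, NMS-style procedure. The boxes, scores, and threshold below are illustrative placeholders; the paper's actual optimization may differ:

```python
import numpy as np

def select_rois(boxes, saliency_scores, k=3, iou_thresh=0.3):
    # Greedy selection over redundant candidate regions: repeatedly take the
    # most salient remaining box, then drop candidates whose IoU with it
    # exceeds the threshold, so the chosen ROIs stay mutually non-redundant.
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)
    order = list(np.argsort(saliency_scores)[::-1])
    keep = []
    while order and len(keep) < k:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep

# Three candidate boxes as (x1, y1, x2, y2); the first two overlap heavily.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
selected = select_rois(boxes, scores)
```

For a real 360{\textdegree} image the IoU would need to be computed on the sphere rather than in ERP pixel coordinates, since planar boxes distort near the poles.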
Unsupervised domain adaptation (UDA) is one of the key technologies for tackling problems in which the ground-truth labels required for supervised learning are difficult to obtain. Typically, UDA assumes that all the samples from both the source and target domains are available during training. However, this is not a realistic assumption in applications with data privacy concerns. To overcome this limitation, UDA without source data, i.e., source-free unsupervised domain adaptation (SFUDA), has recently been proposed. Here, we present an SFUDA method for medical image segmentation. In addition to the entropy minimization commonly used in UDA, we introduce a loss function that avoids the shrinkage of feature norms in the target domain and a prior that preserves the shape of the target organ. We conduct experiments with datasets covering multiple types of source-target domain combinations to show the versatility and robustness of our method, and confirm that it outperforms the state of the art on all the datasets.
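The entropy-minimization term mentioned above is standard in UDA and can be sketched as follows (a generic NumPy illustration, not the paper's implementation):

```python
import numpy as np

def entropy_minimization_loss(logits):
    # Shannon entropy of the softmax predictions, averaged over samples
    # (for segmentation: over pixels). Minimizing it pushes the model
    # toward confident predictions on unlabeled target-domain images.
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

confident = np.array([[10.0, 0.0], [0.0, 10.0]])  # near one-hot predictions
uncertain = np.array([[0.0, 0.0], [0.0, 0.0]])    # uniform predictions
```

On uniform two-class predictions the loss equals ln 2, its maximum, and it approaches zero as predictions become one-hot; this is why the term alone can collapse feature norms, motivating the additional losses the abstract describes.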